The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAMs can represent face images through synthesis using a controllable, parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the faces synthesized by AAMs depend heavily on the training sets and, inherently, on the generalizability of the PCA subspaces. This paper presents a novel Deep Appearance Models (DAMs) approach, an efficient replacement for AAMs, to accurately capture both the shape and the texture of face images under large variations. In this approach, three crucial components, represented in hierarchical layers, are modeled using Deep Boltzmann Machines (DBMs) to robustly capture the variations of facial shapes and appearances. DAMs are therefore superior to AAMs in inferring a representation for new face images under various challenging conditions. The proposed approach is evaluated on several applications to demonstrate its robustness and capabilities, i.e., facial super-resolution reconstruction, facial off-angle reconstruction (face frontalization), facial occlusion removal, and age estimation, using challenging face databases, i.e., Labeled Face Parts in the Wild (LFPW), Helen, and FG-NET. Compared to AAMs and other deep-learning-based approaches, the proposed DAMs achieve competitive results in these applications, demonstrating their advantages in handling occlusions, facial representation, and reconstruction.